
    A novel plasticity rule can explain the development of sensorimotor intelligence

    Grounding autonomous behavior in the nervous system is a fundamental challenge for neuroscience. In particular, self-organized behavioral development raises more questions than it answers. Are there special functional units for curiosity, motivation, and creativity? This paper argues that these features can be grounded in synaptic plasticity itself, without requiring any higher-level constructs. We propose differential extrinsic plasticity (DEP) as a new synaptic rule for self-learning systems and apply it to a number of complex robotic systems as a test case. Without any purpose or goal being specified, seemingly purposeful and adaptive behavior develops, displaying a certain level of sensorimotor intelligence. These surprising results require no system-specific modifications of the DEP rule; rather, they arise from the underlying mechanism of spontaneous symmetry breaking due to the tight brain-body-environment coupling. The new synaptic rule is biologically plausible and would be an interesting target for neurobiological investigation. We also argue that this neuronal mechanism may have been a catalyst in natural evolution. Comment: 18 pages, 5 figures, 7 videos
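
    The DEP rule itself is not spelled out in the abstract. As a purely illustrative sketch, the toy loop below assumes a one-layer controller y = tanh(C x + h) and a simplified DEP-style update that correlates an inverse-model image of the current sensor derivatives with delayed sensor derivatives; the plant, the inverse model M, the delay tau, and all constants are hypothetical stand-ins, not the paper's setup.

    import numpy as np

    # Illustrative DEP-style loop (simplified; constants and plant are assumed, not from the paper).
    rng = np.random.default_rng(0)
    n_sensors, n_motors = 4, 4
    C = rng.normal(scale=0.1, size=(n_motors, n_sensors))   # controller matrix (the "synapses")
    h = np.zeros(n_motors)                                   # controller bias
    M = np.eye(n_motors, n_sensors)                          # assumed trivial inverse model
    eps, tau = 0.01, 1                                       # learning rate and delay (assumed)

    def step_environment(y, x):
        # Hypothetical plant: sensors lag behind the motor command, plus a little noise.
        return 0.9 * x + 0.1 * y + 0.01 * rng.normal(size=x.shape)

    x_hist = [np.zeros(n_sensors) for _ in range(tau + 3)]
    for t in range(1000):
        x = x_hist[-1]
        y = np.tanh(C @ x + h)                               # motor command from current sensors
        x_hist.append(step_environment(y, x))                # act, then read new sensors
        dx_now = x_hist[-1] - x_hist[-2]                     # current sensor derivative
        dx_del = x_hist[-2 - tau] - x_hist[-3 - tau]         # delayed sensor derivative
        # DEP-style outer-product update with a mild decay that keeps C bounded.
        C += eps * (np.outer(M @ dx_now, dx_del) - 0.001 * C)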

    Information driven self-organization of complex robotic behaviors

    Information theory is a powerful tool for expressing principles that drive autonomous systems, because it is domain-invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies on high-dimensional robotic systems. We show spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded in. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality, which hinders learning systems from scaling well. Comment: 29 pages, 12 figures
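
    To make the driving quantity concrete, here is a minimal sketch (not the paper's TiPI derivation) that estimates the one-step predictive information I(x_t; x_{t+1}) of a sensor stream under a Gaussian approximation, where the mutual information reduces to log-determinants of covariance matrices; the AR(1) test process is purely illustrative.

    import numpy as np

    def gaussian_predictive_information(X):
        """One-step predictive information of X (shape: time x sensors), Gaussian approximation."""
        past, future = X[:-1], X[1:]
        d = X.shape[1]
        S = np.cov(np.hstack([past, future]), rowvar=False)   # joint covariance of (past, future)
        logdet = lambda M: np.linalg.slogdet(M)[1]
        # I(past; future) = 0.5 * [ log det S_p + log det S_f - log det S_joint ]
        return 0.5 * (logdet(S[:d, :d]) + logdet(S[d:, d:]) - logdet(S))

    # Illustrative test: a weakly noisy AR(1) process carries nonzero predictive information.
    rng = np.random.default_rng(1)
    T, d = 5000, 3
    X = np.zeros((T, d))
    for t in range(1, T):
        X[t] = 0.8 * X[t - 1] + 0.1 * rng.normal(size=d)
    print("estimated PI (nats):", gaussian_predictive_information(X))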

    Higher coordination with less control - A result of information maximization in the sensorimotor loop

    This work presents a novel learning method in the context of embodied artificial intelligence and self-organization, which makes as few assumptions and restrictions as possible about the world and the underlying model. The learning rule is derived from the principle of maximizing the predictive information in the sensorimotor loop. It is evaluated on robot chains of varying length with individually controlled, non-communicating segments. The comparison of the results shows that maximizing the predictive information per wheel leads to more highly coordinated behavior of the physically connected robots than a maximization per robot. Another focus of this paper is the analysis of the effect of the robot chain length on the overall behavior of the robots. It is shown that longer chains with less capable controllers outperform shorter chains with more complex controllers. The reason is found and discussed in the information-geometric interpretation of the learning process.
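
    The comparison in the abstract, maximization of predictive information per wheel versus per robot, can be illustrated by measuring the two corresponding quantities: the sum of per-segment predictive informations versus the joint predictive information of the whole chain. The coupled AR(1) chain below is a toy stand-in for physically connected segments, and the Gaussian estimator is an assumption.

    import numpy as np

    def gauss_pi(X):
        """One-step predictive information of X (time x components), Gaussian approximation."""
        past, future = X[:-1], X[1:]
        d = X.shape[1]
        S = np.atleast_2d(np.cov(np.hstack([past, future]), rowvar=False))
        ld = lambda M: np.linalg.slogdet(np.atleast_2d(M))[1]
        return 0.5 * (ld(S[:d, :d]) + ld(S[d:, d:]) - ld(S))

    # Toy chain: each segment is pulled toward its neighbours (coordination through coupling).
    rng = np.random.default_rng(2)
    T, n_segments = 5000, 5
    X = np.zeros((T, n_segments))
    for t in range(1, T):
        neigh = 0.5 * (np.roll(X[t - 1], 1) + np.roll(X[t - 1], -1))
        X[t] = 0.6 * X[t - 1] + 0.3 * neigh + 0.1 * rng.normal(size=n_segments)

    per_segment = sum(gauss_pi(X[:, [i]]) for i in range(n_segments))   # "per wheel" objective
    joint = gauss_pi(X)                                                 # "per robot" objective
    print(f"sum of per-segment PI: {per_segment:.3f} nats, joint PI: {joint:.3f} nats")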

    Wavelet analysis of EEG signals as a tool for the investigation of the time architecture of cognitive processes

    Cognitive processes rely heavily on a dedicated spatio-temporal architecture of the underlying neural system - the brain. The spatial aspect is substantiated by modularization, as has been brought to light in much detail by recent sophisticated neural imaging investigations. The temporal aspect is less well investigated, although the role of time is prominent in several approaches to understanding the organization of information processing in the brain. By way of example we mention (i) the synchronization hypothesis for the resolution of the binding problem, cf. [5], [4], [3], and (ii) the efforts to relate the information contained in observed spike rates back to the neuronal mechanisms underlying the cognitive event. In particular, in Refs. [1], [2] Amit et al. tried to bridge the gap between the Miyashita data [10] and the hypothesis that associative memory is realized by the (strange) attractor states of dynamical systems.
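
    As an illustration of the kind of time-frequency decomposition the title refers to, here is a minimal complex Morlet wavelet transform written directly with NumPy; the sampling rate, frequency grid, number of cycles, and the synthetic alpha-burst signal are all illustrative choices, not taken from the paper.

    import numpy as np

    def morlet_power(signal, fs, freqs, n_cycles=6.0):
        """Wavelet power |W(f, t)|^2 of a 1-D signal via convolution with complex Morlet wavelets."""
        power = np.empty((len(freqs), len(signal)))
        for i, f in enumerate(freqs):
            sigma_t = n_cycles / (2 * np.pi * f)                      # temporal width at frequency f
            t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
            wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
            wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))          # unit-energy normalization
            power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
        return power

    # Synthetic "EEG": a 10 Hz burst between 0.5 s and 1.0 s embedded in noise, 2 s at 250 Hz.
    fs = 250
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(3)
    eeg = rng.normal(scale=0.5, size=t.size)
    eeg[fs // 2 : fs] += np.sin(2 * np.pi * 10 * t[fs // 2 : fs])
    power = morlet_power(eeg, fs, freqs=np.arange(2.0, 40.0, 1.0))
    print("power matrix shape (frequencies x time):", power.shape)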

    Self-adjusting reinforcement learning

    We present a variant of the Q-learning algorithm with automatic control of the exploration rate by a competition scheme. The theoretical approach is accompanied by systematic simulations of a chaos control task. Finally, we give interpretations of the algorithm in the context of computational ecology and neural networks.
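
    The competition scheme itself is not described in the abstract, so the sketch below uses a stand-in: tabular Q-learning with Boltzmann action selection whose per-state temperature cools when one action clearly wins the competition among Q-values and reheats when the competition stays close. The toy chain MDP and all constants are assumptions for illustration.

    import numpy as np

    # Q-learning with a self-adjusting exploration temperature (stand-in scheme, not the paper's).
    rng = np.random.default_rng(4)
    n_states, n_actions = 10, 2
    Q = np.zeros((n_states, n_actions))
    temp = np.ones(n_states)                 # per-state exploration temperature
    alpha, gamma = 0.1, 0.95

    def step(s, a):
        # Toy chain MDP: action 1 moves right, action 0 moves left; reward at the right end.
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        return s_next, 1.0 if s_next == n_states - 1 else 0.0

    s = 0
    for t in range(20000):
        logits = Q[s] / max(temp[s], 1e-3)                        # Boltzmann action selection
        p = np.exp(logits - logits.max()); p /= p.sum()
        a = rng.choice(n_actions, p=p)
        s_next, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        # Competition-based adaptation: cool down where one action dominates, reheat otherwise.
        gap = abs(Q[s, 0] - Q[s, 1])
        temp[s] = np.clip(temp[s] + (0.01 if gap < 0.1 else -0.01 * temp[s]), 1e-3, 1.0)
        s = 0 if s_next == n_states - 1 else s_next
    print("greedy policy:", Q.argmax(axis=1))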

    Local online learning of coherent information

    One of the goals of perception is to learn to respond to coherence across space, time and modality. Here we present an abstract framework for the local online unsupervised learning of this coherent information using multi-stream neural networks. The processing units distinguish between feedforward inputs projected from the environment and the lateral, contextual inputs projected from the processing units of other streams. The contextual inputs are used to guide learning towards coherent cross-stream structure. The goal of all the learning algorithms described is to maximize the predictability between each unit's output and its context. Many local cost functions may be applied, e.g. mutual information, relative entropy, squared error, and covariance. Theoretical and simulation results indicate that, of these, the covariance rule (1) is the only rule that specifically links and learns only those streams with coherent information, (2) can be robustly approximated by a Hebbian rule, and (3) remains stable under input noise, in the absence of pairwise input correlations, and in the discovery of locally less informative components that are coherent globally. In accordance with the parallel nature of the biological substrate, we also show that all the rules scale up with the number of streams.
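
    A minimal sketch of the covariance rule for two streams: each linear unit nudges its weights so that its output covaries with the contextual signal it receives from the other stream, which online amounts to a Hebbian-like product of the centred context and the centred feedforward input. The shared latent source and all constants below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(5)
    d, eta, T = 5, 0.01, 20000
    w_a, w_b = rng.normal(scale=0.1, size=d), rng.normal(scale=0.1, size=d)
    mu_xa, mu_xb = np.zeros(d), np.zeros(d)    # running means of the feedforward inputs
    mu_ya, mu_yb = 0.0, 0.0                    # running means of the outputs (the contexts)

    for t in range(T):
        s = rng.normal()                                   # latent source coherent across streams
        x_a = 0.5 * s + rng.normal(size=d)                 # feedforward input to stream A
        x_b = 0.5 * s + rng.normal(size=d)                 # feedforward input to stream B
        y_a, y_b = w_a @ x_a, w_b @ x_b

        mu_xa += 0.01 * (x_a - mu_xa); mu_xb += 0.01 * (x_b - mu_xb)
        mu_ya += 0.01 * (y_a - mu_ya); mu_yb += 0.01 * (y_b - mu_yb)

        # Covariance rule: the other stream's (centred) output gates the weight change.
        w_a += eta * (y_b - mu_yb) * (x_a - mu_xa)
        w_b += eta * (y_a - mu_ya) * (x_b - mu_xb)
        w_a /= max(np.linalg.norm(w_a), 1.0)               # keep weights bounded
        w_b /= max(np.linalg.norm(w_b), 1.0)

    coherent = np.ones(d) / np.sqrt(d)                     # direction carried by the shared source
    print("alignment of w_a with the coherent direction:",
          abs(w_a @ coherent) / np.linalg.norm(w_a))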

    Efficient Q-Learning by Division of Labor

    Q-learning, like other learning paradigms, depends strongly on the representation of the underlying state space. As a special case of the hidden-state problem, we investigate the effect of a self-organizing discretization of the state space in a simple control problem. We apply the neural gas algorithm, with adaptation of the learning rate and neighborhood range, to a simulated cart-pole problem. The learning parameters are determined by the ambiguity of successful actions inside each cell.
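
    A minimal sketch of the neural gas quantization step that could precede the tabular Q-learning described in the abstract: units are ranked by distance to each observed state and pulled toward it with a rank-dependent strength, while the learning rate and neighbourhood range are annealed. The uniform state sampling and the annealing schedule are illustrative assumptions, not the paper's cart-pole simulation.

    import numpy as np

    rng = np.random.default_rng(6)
    n_units, dim, T = 20, 4, 10000                 # e.g. 4-dimensional cart-pole state
    W = rng.uniform(-1, 1, size=(n_units, dim))    # codebook vectors of the neural gas

    for t in range(T):
        x = rng.uniform(-1, 1, size=dim)           # stand-in for an observed state
        eps = 0.5 * (0.01 / 0.5) ** (t / T)        # annealed learning rate
        lam = 10.0 * (0.1 / 10.0) ** (t / T)       # annealed neighbourhood range
        ranks = np.argsort(np.argsort(np.linalg.norm(W - x, axis=1)))   # distance rank of each unit
        W += eps * np.exp(-ranks / lam)[:, None] * (x - W)

    def discretize(x):
        """Map a continuous state to the index of its nearest unit, i.e. a row of the Q-table."""
        return int(np.argmin(np.linalg.norm(W - x, axis=1)))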

    A numerical projection technique for large-scale eigenvalue problems

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where an effective approximate model of smaller complexity is first constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by a standard eigenvalue solver. Here we introduce a generalization of this idea in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. The approach is applicable not only to eigenvalue problems encountered in many-body systems but also to other areas of research that lead to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models. Comment: 7 pages, 4 figures
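
    In the same spirit as the described projection-plus-standard-solver loop (though not the authors' exact algorithm), the following is a minimal Davidson-style sketch for the lowest eigenvalue of a symmetric matrix with a pronounced dominant diagonal: project onto a small subspace, solve the small projected problem with a standard eigensolver, and expand the subspace with a diagonally preconditioned residual until convergence.

    import numpy as np

    def davidson_lowest(A, n_iter=40, tol=1e-8):
        """Lowest eigenpair of a symmetric, diagonally dominant matrix (Davidson-style sketch)."""
        n = A.shape[0]
        diag = np.diag(A)
        V = np.zeros((n, 1))
        V[np.argmin(diag), 0] = 1.0                   # start from the lowest diagonal entry
        theta, u = diag.min(), V[:, 0]
        for _ in range(n_iter):
            H = V.T @ A @ V                           # small projected ("effective") matrix
            vals, vecs = np.linalg.eigh(H)            # standard eigensolver on the small model
            theta, u = vals[0], V @ vecs[:, 0]        # Ritz value and Ritz vector
            r = A @ u - theta * u                     # residual in the full space
            if np.linalg.norm(r) < tol:
                break
            t = r / (theta - diag + 1e-12)            # diagonal (Davidson) preconditioner
            t -= V @ (V.T @ t)                        # orthogonalize against the current subspace
            if np.linalg.norm(t) < 1e-14:
                break
            V = np.hstack([V, (t / np.linalg.norm(t))[:, None]])   # expand subspace and iterate
        return theta, u

    # Illustrative test: random symmetric matrix with a strongly dominant diagonal.
    rng = np.random.default_rng(7)
    n = 400
    A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
    A = 0.5 * (A + A.T)
    lam, _ = davidson_lowest(A)
    print("Davidson estimate:", lam, " exact:", np.linalg.eigvalsh(A)[0])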